7 research outputs found
A laboratory-based method for the evaluation of personalised search
Comparative evaluation of Information Retrieval Systems (IRSs) using publicly available test collections has become an established practice in Information Retrieval (IR). By means of the popular Cranfield evaluation paradigm, IR test collections enable researchers to compare new methods to existing approaches. An important area of IR research where this strategy has not been applied to date is Personalised Information Retrieval (PIR), which has generally relied on user-based evaluations. This paper describes a method that enables the creation of publicly available extended test collections to allow repeatable laboratory-based evaluation of personalised search.
Overview of the CLEF 2017 personalised information retrieval pilot lab (PIR-CLEF 2017)
The Personalised Information Retrieval (PIR-CLEF) Lab workshop at CLEF 2017 is designed to provide a forum for the exploration of methodologies for the repeatable evaluation of personalised information retrieval (PIR). The PIR-CLEF 2017 Lab provides a preliminary pilot edition of a Lab task dedicated to personalised search, while the workshop at the conference is intended to provide a forum for the discussion of strategies for the evaluation of PIR and for the extension of the pilot Lab task. The PIR-CLEF 2017 Pilot Task is the first PIR evaluation benchmark based on the Cranfield paradigm, with the potential benefit of producing evaluation results that are easily reproducible. The task is based on search sessions over a subset of the ClueWeb12 collection, undertaken by 10 users following a clearly defined and novel methodology. The collection provides data gathered from the activities undertaken during the search sessions by each participant, including details of relevant documents as marked by the searchers. The PIR-CLEF 2017 workshop is intended to review the design and construction of this Pilot collection and to consider the topic of reproducible evaluation of PIR more generally, with the aim of launching a more formal PIR Lab at CLEF 2018.
Overview of the CLEF 2018 personalised information retrieval lab (PIR-CLEF 2018)
At CLEF 2018, the Personalised Information Retrieval Lab (PIR-CLEF 2018) was conceived as an initiative aimed at both providing and critically analysing a new approach to the evaluation of personalisation in Information Retrieval (PIR). PIR-CLEF 2018 is the first full edition of the Lab, following the successful Pilot Lab organised at CLEF 2017. PIR-CLEF 2018 provided registered participants with the data set originally developed for the PIR-CLEF 2017 Pilot task: data gathered from real search sessions over a subset of the ClueWeb12 collection, undertaken by 10 volunteer searchers following a novel methodology. Activities during these search sessions included relevance assessment of retrieved documents by the searchers. Sixteen groups registered to participate in PIR-CLEF 2018 and were provided with the data set to allow them to work on PIR-related tasks and to provide feedback on our proposed PIR evaluation methodology, with the aim of creating an effective evaluation task.
Overview of the CLEF 2019 Personalised Information Retrieval Lab (PIR-CLEF 2019)